
    Updates by Reasoning about States

    It has been argued that some sort of control must be introduced in order to perform update operations in deductive databases. Indeed, many approaches rely on a procedural semantics of rule-based languages and often perform updates as side effects. Depending on the evaluation procedure, updates are generally performed in the body (top-down evaluation) or in the head of rules (bottom-up evaluation). We demonstrate that updates can be specified in a purely declarative manner using standard model-based semantics, without relying on procedural aspects of program evaluation. The key idea is to incorporate states as first-class objects into the language. This is the source of the additional expressiveness needed to define updates. We introduce the update language Statelog+-, discuss various domains of application, and outline how to implement computation of the perfect model semantics for Statelog+- programs.
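The core idea of the abstract — making states first-class so an update is just a derivation into the next state — can be illustrated with a minimal sketch. This is not the Statelog+- semantics itself; the relation and value names are hypothetical, and the frame rule is reduced to plain set arithmetic:

```python
# Illustrative only: facts are tuples, a database state is a set of facts,
# and an "update" is an ordinary derivation from state s into state s+1.

def step(facts, inserts, deletes):
    """Frame rule sketch: carry facts forward, minus deletions, plus insertions."""
    return (facts - deletes) | inserts

# state 0: a hypothetical initial database
s0 = {("emp", "ann"), ("emp", "bob")}

# update rules firing in state 0 request changes that hold in state 1
s1 = step(s0, inserts={("emp", "carl")}, deletes={("emp", "bob")})

print(sorted(s1))  # [('emp', 'ann'), ('emp', 'carl')]
```

Because the successor state is derived declaratively from the current one, no procedural evaluation order is needed to give updates a meaning.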

    Capturing the "Whole Tale" of Computational Research: Reproducibility in Computing Environments

    We present an overview of the recently funded "Merging Science and Cyberinfrastructure Pathways: The Whole Tale" project (NSF award #1541450). Our approach has two nested goals: 1) deliver an environment that enables researchers to create a complete narrative of the research process, including exposure of the data-to-publication lifecycle, and 2) systematically and persistently link research publications to their associated digital scholarly objects, such as the data, code, and workflows. To enable this, Whole Tale will create an environment where researchers can collaborate on data, workspaces, and workflows and then publish them for future adoption or modification. Published data and applications will be consumed directly by users of the Whole Tale environment or integrated into existing or future domain Science Gateways.

    Reasoning over Taxonomic Change: Exploring Alignments for the Perelleschus Use Case

    Classifications and phylogenetic inferences of organismal groups change in light of new insights. Over time these changes can result in an imperfect tracking of taxonomic perspectives through the re-/use of Code-compliant or informal names. To mitigate these limitations, we introduce a novel approach for aligning taxonomies through the interaction of human experts and logic reasoners. We explore the performance of this approach with the Perelleschus use case of Franz & Cardona-Duque (2013). The use case includes six taxonomies published from 1936 to 2013, 54 taxonomic concepts (i.e., circumscriptions of names individuated according to their respective source publications), and 75 expert-asserted Region Connection Calculus articulations (e.g., congruence, proper inclusion, overlap, or exclusion). An open-source reasoning toolkit is used to analyze 13 paired Perelleschus taxonomy alignments under heterogeneous constraints and interpretations. The reasoning workflow optimizes the logical consistency and expressiveness of the input and infers the set of maximally informative relations among the entailed taxonomic concepts. The latter are then used to produce merge visualizations that represent all congruent and non-congruent taxonomic elements among the aligned input trees. In this small use case with 6-53 input concepts per alignment, the information gained through the reasoning process is on average one order of magnitude greater than in the input. The approach offers scalable solutions for tracking provenance among succeeding taxonomic perspectives that may have differential biases in naming conventions, phylogenetic resolution, ingroup and outgroup sampling, or ostensive (member-referencing) versus intensional (property-referencing) concepts and articulations.
    Comment: 30 pages, 16 figures
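The five Region Connection Calculus relations mentioned in the abstract can be sketched for the special case where two concepts' extensions (member sets) are fully known. The actual toolkit reasons over expert-asserted articulations without enumerating members, so this is only a hedged illustration with hypothetical member identifiers:

```python
# Hypothetical sketch: derive the RCC-5 relation between two taxonomic
# concepts from their (assumed known) member sets.

def rcc5(a, b):
    """Return the RCC-5 relation between sets a and b."""
    if a == b:
        return "congruent"          # ==
    if a < b:                       # proper subset
        return "properly included"  # <
    if a > b:                       # proper superset
        return "properly includes"  # >
    if a & b:                       # shared members, neither contains the other
        return "overlap"            # ><
    return "disjoint"               # !

# hypothetical circumscriptions of the same name in two publications
concept_1936 = {"sp1", "sp2"}
concept_2013 = {"sp1", "sp2", "sp3"}
print(rcc5(concept_1936, concept_2013))  # properly included
```

In the real workflow the reasoner works in the opposite direction: it takes asserted articulations like these as constraints and infers the maximally informative relations among all concept pairs.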

    The First Provenance Challenge

    The first Provenance Challenge was set up in order to provide a forum for the community to help understand the capabilities of different provenance systems and the expressiveness of their provenance representations. To this end, a Functional Magnetic Resonance Imaging workflow was defined, which participants had to either simulate or run in order to produce some provenance representation, from which a set of identified queries had to be implemented and executed. Sixteen teams responded to the challenge and submitted their inputs. In this paper, we present the challenge workflow and queries, and summarise the participants' contributions.
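The kind of query the challenge asked for — tracing an output back through the workflow to everything it was derived from — can be sketched as a transitive-closure walk over "derived-from" edges. The graph and artifact names below are hypothetical, not the actual fMRI challenge workflow:

```python
# Hypothetical sketch of a lineage query over a provenance DAG:
# graph maps each artifact to the artifacts it was directly derived from.

def ancestors(graph, node):
    """All transitive predecessors of `node` (everything it depends on)."""
    seen = set()
    stack = [node]
    while stack:
        for parent in graph.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

g = {"atlas_image": ["aligned_1", "aligned_2"],
     "aligned_1": ["raw_1"],
     "aligned_2": ["raw_2"]}

print(sorted(ancestors(g, "atlas_image")))
# ['aligned_1', 'aligned_2', 'raw_1', 'raw_2']
```

Much of the challenge was about whether each system's representation could answer such queries at all, and how naturally.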

    Scientific Workflows: Catalyzing the Grid-Semantic Web Reaction

    Scientific workflows allow scientists to automate repetitive data management, analysis, and visualization tasks, and to document the provenance of analysis results. Scientific workflows are composed of interlinked computational components (sometimes called actors) and the datasets that are consumed and produced by those components. Scientific workflow systems are problem-solving environments to design, reuse, share, execute, monitor, and archive scientific workflows. As such, they are the primary tool that end-user scientists use when interacting with the emerging e-Science cyberinfrastructure. Scientific workflow systems can often benefit from both Grid and Semantic Web capabilities. Thus, scientific workflows can bring together these otherwise loosely connected technologies and "catalyze the reaction" between them.
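The abstract's picture of a workflow — interlinked actors consuming and producing datasets, with provenance recorded along the way — can be reduced to a minimal sketch. The actor names and the provenance record shape are assumptions for illustration, not any particular workflow system's API:

```python
# Illustrative sketch: actors as wrapped functions in a dataflow chain,
# each recording a (name, input, output) lineage triple as it runs.

provenance = []

def actor(name, fn):
    """Wrap a function as a named actor that logs its data lineage."""
    def run(data):
        out = fn(data)
        provenance.append((name, data, out))
        return out
    return run

clean   = actor("clean",   lambda xs: [x for x in xs if x is not None])
analyze = actor("analyze", lambda xs: sum(xs) / len(xs))

result = analyze(clean([1, None, 3]))
print(result)           # 2.0
print(len(provenance))  # 2
```

Even this toy version shows why the two technology families complement each other: executing the actors is a Grid-style concern, while interpreting the recorded lineage triples is where Semantic Web representations come in.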